Bayesian Neural Networks with Maximum Mean Discrepancy regularization

Authors

Jary Pomponi, Simone Scardapane, Aurelio Uncini

Abstract

Bayesian Neural Networks (BNNs) are trained to optimize an entire distribution over their weights instead of a single set, having significant advantages in terms of, e.g., interpretability, multi-task learning, and calibration. Because of the intractability of the resulting optimization problem, most BNNs are either sampled through Monte Carlo methods, or trained by minimizing a suitable Evidence Lower BOund (ELBO) on a variational approximation. In this paper, we propose a variant of the latter, wherein we replace the Kullback-Leibler divergence in the ELBO term with a Maximum Mean Discrepancy (MMD) estimator, inspired by recent work in variational inference. After motivating our proposal based on the properties of the MMD term, we proceed to show a number of empirical advantages of the proposed formulation over the state-of-the-art. In particular, BNNs trained with the proposed formulation achieve higher accuracy on multiple benchmarks, including several image classification tasks. In addition, they are more robust to the selection of a prior over the weights, and they are better calibrated. As a second contribution, we provide a new formulation for estimating the uncertainty on a given prediction, showing that it performs in a more robust fashion against adversarial attacks and the injection of noise over the inputs, compared to more classical criteria such as the differential entropy.
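To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of a variational BNN loss in which the KL penalty of the standard ELBO is replaced by a sample-based MMD penalty. It assumes a mean-field Gaussian posterior, a standard Gaussian prior, an RBF kernel, and a simple biased MMD estimate; the names (rbf_kernel, mmd2, mmd_elbo_loss, nll_fn) and the trade-off weight beta are illustrative.

```python
import torch
import torch.nn.functional as F

def rbf_kernel(x, y, sigma=1.0):
    # x: (n, d), y: (m, d) -> (n, m) RBF Gram matrix
    return torch.exp(-torch.cdist(x, y) ** 2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Simple biased (V-statistic) estimate of the squared MMD
    # between samples x ~ q and y ~ p
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

# Mean-field Gaussian posterior q(w) = N(mu, diag(sigma_q^2)) over d weights
d = 64
mu = torch.zeros(d, requires_grad=True)
rho = torch.zeros(d, requires_grad=True)   # sigma_q = softplus(rho) > 0

def mmd_elbo_loss(nll_fn, n_samples=8, beta=1e-2):
    # nll_fn: hypothetical callable mapping (n_samples, d) weight draws
    # to per-sample negative log-likelihoods on a minibatch
    sigma_q = F.softplus(rho)
    w = mu + sigma_q * torch.randn(n_samples, d)  # reparameterized draws from q
    w_prior = torch.randn(n_samples, d)           # draws from the N(0, I) prior
    # A standard ELBO would add a KL(q || p) term here; the paper's proposal
    # replaces it with an MMD estimate between posterior and prior samples.
    return nll_fn(w).mean() + beta * mmd2(w, w_prior)
```

In such a sketch, the kernel bandwidth sigma and the number of weight samples control the bias and variance of the penalty; the paper's exact estimator, kernel, and weighting may differ.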


Similar resources

Training generative neural networks via Maximum Mean Discrepancy optimization

We consider training a deep neural network to generate samples from an unknown distribution given i.i.d. data. We frame learning as an optimization minimizing a two-sample test statistic: informally speaking, a good generator network produces samples that cause a two-sample test to fail to reject the null hypothesis. As our two-sample test statistic, we use an unbiased estimate of the maximum mea...
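The "unbiased estimate" mentioned above is, in standard MMD-network formulations, the U-statistic estimator of the squared MMD, which omits the diagonal kernel terms. A sketch with an RBF kernel (the kernel choice and bandwidth are assumptions for illustration, not necessarily this paper's setup):

```python
import torch

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased (U-statistic) estimate of the squared MMD with an RBF kernel.

    x: (n, d) generator samples, y: (m, d) data samples. The k(x_i, x_i)
    diagonal terms are excluded, so the estimate can be slightly negative.
    """
    n, m = x.shape[0], y.shape[0]
    k_xx = torch.exp(-torch.cdist(x, x) ** 2 / (2 * sigma ** 2))
    k_yy = torch.exp(-torch.cdist(y, y) ** 2 / (2 * sigma ** 2))
    k_xy = torch.exp(-torch.cdist(x, y) ** 2 / (2 * sigma ** 2))
    term_xx = (k_xx.sum() - k_xx.diag().sum()) / (n * (n - 1))
    term_yy = (k_yy.sum() - k_yy.diag().sum()) / (m * (m - 1))
    return term_xx + term_yy - 2 * k_xy.mean()
```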

Optimized Maximum Mean Discrepancy

We propose a method to optimize the representation and distinguishability of samples from two probability distributions, by maximizing the estimated power of a statistical test based on the maximum mean discrepancy (MMD). This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GAN), in which a model attempts to generate realistic samples, and a ...
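For reference, a common proxy for the asymptotic power of an MMD test in work on optimized MMD criteria (assumed here for illustration; the exact objective in this paper may differ) is the estimated MMD divided by the standard deviation of its estimator, maximized over kernel parameters:

```latex
% Power-proxy objective: choose kernel parameters \theta that make the MMD
% estimate large relative to its estimator's standard deviation.
\[
  \theta^{\star} \;=\; \arg\max_{\theta}\;
  \frac{\widehat{\mathrm{MMD}}^{2}_{\theta}(X, Y)}{\hat{\sigma}_{\theta}(X, Y)}
\]
```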

Maximum Mean Discrepancy Imitation Learning

Imitation learning is an efficient method for many robots to acquire complex skills. Some recent approaches to imitation learning provide strong theoretical performance guarantees. However, there remain crucial practical issues, especially during the training phase, where the training strategy may require execution of control policies that are possibly harmful to the robot or its environment. M...

Bayesian Regularization in Constructive Neural Networks

In this paper, we study the incorporation of Bayesian regularization into constructive neural networks. The degree of regularization is automatically controlled in the Bayesian inference framework and hence does not require manual setting. Simulation shows that regularization, with input training using a full Bayesian approach, produces networks with better generalization performance and low...

Testing Hypotheses by Regularized Maximum Mean Discrepancy

Do two data samples come from different distributions? Recent studies of this fundamental problem focused on embedding probability distributions into sufficiently rich characteristic Reproducing Kernel Hilbert Spaces (RKHSs), to compare distributions by the distance between their embeddings. We show that Regularized Maximum Mean Discrepancy (RMMD), our novel measure for kernel-based hypothesis ...
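The "distance between embeddings" referred to here is the standard population definition of the MMD, which also underlies the regularized variant:

```latex
% MMD as the RKHS distance between the kernel mean embeddings of P and Q.
\[
  \mathrm{MMD}(P, Q) \;=\; \lVert \mu_P - \mu_Q \rVert_{\mathcal{H}},
  \qquad
  \mu_P \;=\; \mathbb{E}_{x \sim P}\bigl[k(x, \cdot)\bigr].
\]
```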

Journal

Journal title: Neurocomputing

Year: 2021

ISSN: 0925-2312, 1872-8286

DOI: https://doi.org/10.1016/j.neucom.2021.01.090